Björn Stenger, 28 Sep 2009, ICCV 2009, Kyoto. Tutorial – Part 3: Tracking Using Classification and Online Learning
Roadmap:
- Tracking by classification
- On-line boosting
- Multiple instance learning
- Multi-classifier boosting
- Online feature selection
- Adaptive trees
- Ensemble tracking
- Online random forest
- Combining off-line & on-line
- Tracking by optimization
Tracking by Optimization. Example: mean shift tracking. Given: the target location in frame t and a color distribution q (target). In frame t+1, minimize the distance between the candidate distribution p at location y and the target distribution q. Mean shift performs iterative optimization and finds a local optimum. Extension: down-weight features that also occur in the background [Comaniciu et al. 00]
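The mean shift iteration above can be sketched as follows; this is a minimal numpy sketch (function and argument names are illustrative, not from the paper): each pixel in the candidate window is weighted by sqrt(q_u / p_u) for its color bin u, and the window centre moves to the weighted mean of the pixel positions.

```python
import numpy as np

def mean_shift_step(positions, pixel_bins, p_candidate, q_target):
    # One mean shift iteration for color-histogram tracking:
    # weight each pixel by sqrt(q_u / p_u) for its color bin u,
    # then move the window centre to the weighted mean of positions.
    w = np.sqrt(q_target[pixel_bins] /
                np.maximum(p_candidate[pixel_bins], 1e-12))
    return (positions * w[:, None]).sum(axis=0) / w.sum()
```

Iterating this step until the centre stops moving converges to a local optimum of the histogram similarity.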
Support Vector Tracking [Avidan 01]
Displacement Expert Tracking [Williams et al. 03] Learn a nonlinear mapping from  images  I  to displacements  δ u . Off-line training On-line tracking
Displacement Expert Tracking (2) [Williams et al. 03] Results on 136 frames of face sequence (no scale changes)
Online Selection of Discriminative Features [Collins et al. 03]. Select the features that best discriminate between object and background. From a feature pool, compute a discriminative score measuring foreground/background separability (variance ratio): within-class variance should be small, total variance should be large.
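The variance-ratio score can be sketched as below; this is a minimal sketch on 1-D feature responses (in the paper the score is computed on log-likelihood-ratio images, and the helper name here is illustrative):

```python
import numpy as np

def variance_ratio(fg_vals, bg_vals):
    # Discriminative score: total variance of the pooled samples
    # divided by the sum of within-class variances. Large score =
    # well-separated classes that are each tightly clustered.
    pooled = np.concatenate([fg_vals, bg_vals])
    within = fg_vals.var() + bg_vals.var()
    return pooled.var() / max(within, 1e-12)
```

Features are then ranked by this score and the top-ranked ones used for tracking.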
On-line Feature Selection (2) [Collins et al. 03]. For the input image, features are ranked according to the variance ratio; mean shift is run on each of the top-ranked features and the resulting estimates are combined via the median to give the new location.
Ensemble Tracking [Avidan 05]: use classifiers to distinguish the object from the background in feature space. The first location is provided manually; all pixels become training data labeled {+1, -1}, each with an 11-dimensional feature vector: an 8-bin orientation histogram of a 5x5 neighborhood plus 3 RGB values.
Ensemble Tracking [Avidan 05]. Train T (=5) weak linear classifiers h and combine them into a strong classifier with AdaBoost. Build a confidence map from the classifier margins (scale positive margins to [0,1]) and find its mode using mean shift.
Ensemble Tracking Update [Avidan 05]. For each new frame I_j: test examples x_i using the strong classifier H(x); run mean shift on the confidence map; obtain new pixel labels y; keep the K (=4) best (lowest-error) weak classifiers h and update their weights; train T-K (=1) new weak classifiers.
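The keep/retrain decision in the update step can be sketched as follows (a minimal sketch; the function name is illustrative): rank the weak classifiers by their weighted error on the new frame, keep the K best, and mark the rest for retraining.

```python
import numpy as np

def split_keep_retrain(weak_errors, K=4):
    # Keep the K weak classifiers with the lowest error on the new
    # frame; the remaining slots are retrained from scratch.
    order = np.argsort(weak_errors)
    keep = sorted(order[:K].tolist())
    retrain = sorted(order[K:].tolist())
    return keep, retrain
```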
Ensemble Tracking Properties [Avidan 05]
AdaBoost (recap) [Freund, Schapire 97]. Input: labeled training set. Algorithm: for n = 1 to N (number of weak classifiers), train a weak classifier on the weighted training set and re-weight the examples. Result: weighted combination of the weak classifiers. [slide credit H. Grabner]
AdaBoost (recap) [Freund, Schapire 97]: all training examples (x1, y1) ... (xN, yN) are available at once; the weak classifiers h1, h2, h3, h4 are trained in sequence and combined into the weighted strong classifier H.
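The recap above can be made concrete with a minimal sketch of discrete AdaBoost on 1-D data with threshold stumps (function names and the stump weak learner are illustrative choices, not from the slides):

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    # Discrete AdaBoost with threshold stumps; labels y in {-1, +1}.
    n = len(y)
    w = np.full(n, 1.0 / n)          # example weights
    ensemble = []                    # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for thr in np.unique(X):     # exhaustive stump search
            for pol in (+1, -1):
                pred = pol * np.where(X >= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((thr, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * p * np.where(X >= t, 1, -1) for t, p, a in ensemble)
    return np.sign(score)
```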
From Off-line to On-line Boosting [Oza, Russell 01]: the off-line and on-line algorithms side by side; both loop for n = 1 to N over the weak classifiers, but the on-line version processes one training example at a time instead of the whole set. [slide credit H. Grabner]
Online Boosting [Oza, Russell 01]. Algorithm: for each new example, for n = 1 to N (number of weak classifiers), update weak classifier n and adjust the example's importance weight. Result: weighted combination of the weak classifiers. [slide credit H. Grabner]
Online Boosting [Oza, Russell 01]: training examples (x1, y1), (x2, y2), (x3, y3), (x4, y4), ... arrive one at a time; each example updates the weak classifiers h1 ... h4 in sequence, and the weak classifiers form the weighted combination H.
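The sequential update can be sketched as below, assuming a toy 1-D weak learner (the `RunningMeanStump` class is purely illustrative). The key idea from Oza & Russell: each weak learner sees the example k ~ Poisson(lambda) times, and lambda grows when the current learner misclassifies, so later learners focus on hard examples, mirroring AdaBoost's reweighting.

```python
import numpy as np

class RunningMeanStump:
    # Toy 1-D weak learner: classify by the nearer class mean.
    def __init__(self):
        self.sum = {+1: 0.0, -1: 0.0}
        self.n = {+1: 0, -1: 0}
    def update(self, x, y):
        self.sum[y] += x
        self.n[y] += 1
    def predict(self, x):
        means = {c: self.sum[c] / self.n[c] for c in (+1, -1) if self.n[c]}
        if not means:
            return +1
        return min(means, key=lambda c: abs(x - means[c]))

def online_boost_update(x, y, weak_learners, lam_sc, lam_sw, rng):
    # lam_sc / lam_sw accumulate the weight of correctly / wrongly
    # classified examples per weak learner.
    lam = 1.0
    for i, h in enumerate(weak_learners):
        for _ in range(rng.poisson(lam)):   # see the example k ~ Poisson(lam) times
            h.update(x, y)
        if h.predict(x) == y:
            lam_sc[i] += lam
            lam *= (lam_sc[i] + lam_sw[i]) / (2 * lam_sc[i])  # shrink weight
        else:
            lam_sw[i] += lam
            lam *= (lam_sc[i] + lam_sw[i]) / (2 * lam_sw[i])  # grow weight
```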
Convergence results [Oza 01]
Priming can help [Oza 01] Batch learning on first 200 points, then online
Online Boosting for Feature Selection [Grabner, Bischof 06] Each feature corresponds to a weak classifier  Combination of simple features
Selectors [Grabner, Bischof 06]. A selector chooses one feature/classifier from a pool, so selectors can themselves be seen as classifiers. Idea: perform boosting on the selectors, not on the features directly.
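A selector's decision can be sketched in one line (a minimal sketch; the function name is illustrative): pick whichever weak classifier in the pool currently has the lowest estimated error.

```python
def select_best(pool_errors):
    # A selector switches to the feature/weak classifier with the
    # lowest estimated error in its pool.
    return min(range(len(pool_errors)), key=lambda i: pool_errors[i])
```

Boosting then operates on these selectors, each of which may change its chosen feature as the error estimates evolve online.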
Online Feature Selection [Grabner, Bischof 06]. For each training sample: initialize its importance; then for each selector in turn (all sharing a global classifier pool): estimate the errors, select the best weak classifier, update its weight, and re-estimate the sample importance. The selectors together form the current strong classifier.
Tracking Principle [Grabner, Bischof 06] [slide credit H. Grabner]
Adaptive Tracking [Grabner, Bischof 06]
Limitations [Grabner, Bischof 06]
Multiple Instance Learning (MIL) [Keeler et al. 90, Dietterich et al. 97, Viola et al. 05]
Multiple Instance Learning [Babenko et al. 09]: supervised learning trains a classifier from individually labeled examples; MIL trains a classifier from labeled bags of instances, where a positive bag contains at least one positive instance.
Online MIL Boost [Babenko et al. 09]: pool of weak classifier candidates
Online MIL Boost [Babenko et al. 09]. From frame t to frame t+1: get data (bags), update all classifiers in the pool, greedily add the best K to the strong classifier.
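The bag likelihood that MIL boosting optimizes can be sketched with the noisy-OR model (a minimal sketch; the function name is illustrative): a bag is positive if at least one of its instances is, so the bag probability is one minus the product of the instance-negative probabilities.

```python
import numpy as np

def bag_probability(instance_scores):
    # Noisy-OR bag model: p(bag positive) = 1 - prod_i (1 - p_i),
    # where p_i is the sigmoid of the strong classifier score of
    # instance i in the bag.
    p_inst = 1.0 / (1.0 + np.exp(-np.asarray(instance_scores)))
    return 1.0 - np.prod(1.0 - p_inst)
```

One confident instance is enough to make the whole bag confident, which is exactly the robustness to imprecise positive labels that MIL tracking exploits.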
Tracking Results [Babenko et al. 09]
On-line / Off-line Spectrum. Tracking methods range from pure detection (general object / any background, fixed training set), through object/background classifiers with on-line updates (adaptive detectors), to tracking with a prior; cf. the template update problem [Matthews et al. 04].
Semi-supervised Use labeled data as prior Estimate labels & sample importance for unlabeled data [Grabner et al. 08]
Tracking Results [Grabner et al. 08]
Tracking Results [Grabner et al. 08]
Beyond Semi-Supervised [Stalder et al. 09]. The fixed semi-supervised prior alone is "too inflexible"; add an object-specific recognizer as an "adaptive prior", updated with positives from tracked samples validated by the detector and negatives from the background during detection.
Results [Stalder et al. 09]
Results [Stalder et al. 09]
Task: Tracking a Fist
Learning to Track with Multiple Observers [Stenger et al. 09]. Idea: learn the optimal combination of observers (trackers) in an off-line training stage; each tracker can be fixed or adaptive. Given labeled training data and an object detector, off-line training of observer combinations yields an optimal tracker for the task at hand.
Input: set of observers, each returning a location estimate and a confidence value [Stenger et al. 09]:
- On-line classifiers: [OB] on-line boosting, [LDA] linear discriminant analysis, [BLDA] boosted LDA, [OFS] on-line feature selection
- Histogram-based: [MS] color-based mean shift, [C] color probability, [M] motion probability, [CM] color and motion probability
- Local features: [BOF] block-based optical flow, [KLT] Kanade-Lucas-Tomasi, [FF] flocks of features
- Single template: [RT] randomized templates, [NCC] normalized cross-correlation, [SAD] sum of absolute differences
Combination Schemes [Stenger et al. 09]. Find good combinations of observers automatically by evaluating all pairs/triplets, using two different combination schemes.
How to Measure Performance? [Stenger et al. 09] Run each tracker on all frames (don't stop after the first failure); measure the position error; declare loss of track when the error exceeds a threshold; re-initialize with the detector.
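The evaluation protocol above can be sketched as a simplified scoring loop (a minimal sketch; the function name and return values are illustrative): compute per-frame position error and count frames where it exceeds the threshold as losses of track, at which point the tracker would be re-initialized by the detector and evaluation continues.

```python
import numpy as np

def evaluate_tracker(pred, gt, thresh):
    # Per-frame Euclidean position error against ground truth; a frame
    # whose error exceeds the threshold counts as a loss of track.
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float),
                         axis=1)
    return err.mean(), int((err > thresh).sum())
```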
Results on Hand Data (Single Observers) [Stenger et al. 09]
Results on Hand Data Single observers Pairs of observers [Stenger et al. 09]
Tracking Results [Stenger et al. 09]
Face Tracking Results [Stenger et al. 09]
Multi-Classifier Boosting [Kim et al. 09]: standard AdaBoost vs. multi-class boosting with a gating function.
Online Multi-Class Boosting [Kim et al. 09]
And now Trees
Online Adaptive Decision Trees [Basak 04]: a sigmoidal soft partitioning function (hyperplane) at each node gives an activation value at node i.
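The soft partitioning at a node can be sketched as follows (a minimal sketch; `beta` is an assumed sharpness parameter name): instead of a hard left/right decision, each child receives a fractional activation through a sigmoid of the hyperplane response, which keeps the tree differentiable and hence adaptable online.

```python
import numpy as np

def soft_activation(x, w, b, beta=1.0):
    # Sigmoidal soft partition at an internal node: the left child
    # gets sigmoid(beta * (w.x + b)), the right child the remainder.
    left = 1.0 / (1.0 + np.exp(-beta * (np.dot(w, x) + b)))
    return left, 1.0 - left
```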
Adaptive Vocabulary Forests [Yeh et al. 07] [slide credit T. Yeh]
Incremental Building of Vocabulary Tree [Yeh et al. 07]
Tree Growing by Splitting Leaf Nodes [Yeh et al. 07]
Tree Adaptation with Re-Clustering [Yeh et al. 07]: identify the affected neighborhood, remove existing boundaries, re-cluster the points.
Accuracy drops when adaptation is stopped [Yeh et al. 07]. Recent accuracy is measured over a sliding window of T = 100 queries; R(j) = 1 if the top-ranked retrieved image belongs to the same group.
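The sliding-window accuracy monitor can be sketched as follows (a minimal sketch; the function name and `deque`-based window are illustrative choices):

```python
from collections import deque

def recent_accuracy(history, correct, T=100):
    # Append R(j) for the newest query (1 if the top-ranked retrieved
    # image is from the same group) and return the accuracy over the
    # last T queries.
    history.append(1 if correct else 0)
    while len(history) > T:
        history.popleft()
    return sum(history) / len(history)
```

A drop in this running score signals that the vocabulary tree has gone stale and adaptation should resume.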
Tree Pruning [Yeh et al. 07]
On-line Random Forests [Saffari et al. 09]. For each tree t in the forest: update tree t with the new training example k times (on-line bagging); estimate its out-of-bag error; discard tree t and insert a new one with a probability that grows with its out-of-bag error.
Leaf Update and Split [Saffari et al. 09]: each leaf node maintains a set of random split functions and per-class statistics; compute the gain of each potential split function to decide when and how to split the current leaf node.
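The gain of a candidate split can be sketched from the per-class counts accumulated in the leaf (a minimal information-gain sketch; function names are illustrative):

```python
import numpy as np

def entropy(counts):
    # Shannon entropy (bits) of a class-count vector.
    p = np.asarray(counts, float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def split_gain(left_counts, right_counts):
    # Parent entropy minus the size-weighted child entropies.
    left = np.asarray(left_counts, float)
    right = np.asarray(right_counts, float)
    parent = left + right
    n_l, n_r, n = left.sum(), right.sum(), parent.sum()
    return entropy(parent) - (n_l / n) * entropy(left) \
                           - (n_r / n) * entropy(right)
```

The leaf is split with the highest-gain candidate once it has seen enough examples and the gain is large enough.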
Results [Saffari et al. 09] Convergence of  on-line RF classification  to  batch solution  on USPS data set Tracking error of  online RF  compared to  online boosting
Conclusions
References
Avidan, S., Support Vector Tracking, Proc. CVPR, Hawaii, 2001.
Avidan, S., Support Vector Tracking, IEEE Trans. PAMI, Vol. 26(8), pp. 1064-1072, 2004.
Avidan, S., Ensemble Tracking, IEEE Trans. PAMI, Vol. 29(2), pp. 261-271, 2007.
Avidan, S., Ensemble Tracking, Proc. CVPR, San Diego, USA, 2005.
Babenko, B., Yang, M.-H., Belongie, S., Visual Tracking with Online Multiple Instance Learning, Proc. CVPR, 2009.
Basak, J., Online Adaptive Decision Trees, Neural Computation, Vol. 16(9), pp. 1959-1981, September 2004.
Collins, R. T., Liu, Y., Leordeanu, M., On-Line Selection of Discriminative Tracking Features, IEEE Trans. PAMI, Vol. 27(10), pp. 1631-1643, October 2005.
Collins, R. T., Liu, Y., On-Line Selection of Discriminative Tracking Features, Proc. ICCV, pp. 346-352, October 2003.
Comaniciu, D., Ramesh, V., Meer, P., Kernel-Based Object Tracking, IEEE Trans. PAMI, Vol. 25(5), pp. 564-575, 2003.
Comaniciu, D., Ramesh, V., Meer, P., Real-Time Tracking of Non-Rigid Objects using Mean Shift, Proc. CVPR, Hilton Head Island, South Carolina, Vol. 2, pp. 142-149, 2000.
Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T., Solving the Multiple Instance Problem with Axis-Parallel Rectangles, Artificial Intelligence, Vol. 89, pp. 31-71, 1997.
Freund, Y., Schapire, R. E., A Decision-Theoretic Generalization of On-line Learning and an Application to Boosting, Journal of Computer and System Sciences, Vol. 55(1), pp. 119-139, August 1997.
Grabner, H., Leistner, C., Bischof, H., Semi-supervised On-line Boosting for Robust Tracking, Proc. ECCV, 2008.
Grabner, H., Roth, P. M., Bischof, H., Eigenboosting: Combining Discriminative and Generative Information, Proc. CVPR, 2007.
Grabner, H., Grabner, M., Bischof, H., Real-time Tracking via On-line Boosting, Proc. BMVC, Vol. 1, pp. 47-56, 2006.
Grabner, H., Bischof, H., On-line Boosting and Vision, Proc. CVPR, Vol. 1, pp. 260-267, 2006.
Keeler, J. D., Rumelhart, D. E., Leow, W.-K., Integrated Segmentation and Recognition of Hand-Printed Numerals, NIPS 3, pp. 557-563, Denver, Colorado, USA, 1990.
Kim, T.-K., Cipolla, R., MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features, NIPS, Vancouver, Canada, December 2008.
Kim, T.-K., Woodley, T., Stenger, B., Cipolla, R., Online Multiple Classifier Boosting for Object Tracking, Technical Report CUED/F-INFENG/TR631, Department of Engineering, University of Cambridge, June 2009.
Li, Y., Ai, H., Lao, S., Kawade, M., Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Lifespans, Proc. CVPR, 2007.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, Proc. BMVC, 2003.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, IEEE Trans. PAMI, Vol. 26(6), pp. 810-815, June 2004.
Okuma, K., Taleghani, A., De Freitas, N., Little, J., Lowe, D. G., A Boosted Particle Filter: Multitarget Detection and Tracking, Proc. ECCV, May 2004.
Oza, N. C., Online Ensemble Learning, Ph.D. thesis, University of California, Berkeley.
Oza, N. C., Russell, S., Online Bagging and Boosting, Eighth Int. Workshop on Artificial Intelligence and Statistics, pp. 105-112, Key West, FL, USA, January 2001.
Oza, N. C., Russell, S., Experimental Comparisons of Online and Batch Versions of Bagging and Boosting, Proc. ACM SIGKDD, San Francisco, California, 2001.
Saffari, A., Leistner, C., Santner, J., Godec, M., Bischof, H., On-line Random Forests, 3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.
Stalder, S., Grabner, H., Van Gool, L., Beyond Semi-Supervised Tracking: Tracking Should Be as Simple as Detection, but not Simpler than Recognition, Proc. ICCV Workshop on On-line Learning for Computer Vision, 2009.
Stenger, B., Woodley, T., Cipolla, R., Learning to Track With Multiple Observers, Proc. CVPR, Miami, June 2009.
Viola, P. A., Platt, J., Zhang, C., Multiple Instance Boosting for Object Detection, Proc. NIPS, 2005.
Williams, O., Blake, A., Cipolla, R., Sparse Bayesian Regression for Efficient Visual Tracking, IEEE Trans. PAMI, August 2005.
Williams, O., Blake, A., Cipolla, R., A Sparse Probabilistic Learning Algorithm for Real-Time Tracking, Proc. ICCV, October 2003.
Woodley, T., Stenger, B., Cipolla, R., Tracking Using Online Feature Selection and a Local Generative Model, Proc. BMVC, Warwick, September 2007.
Yeh, T., Lee, J., Darrell, T., Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning, Proc. ICCV, 2007.
Code
Severin Stalder, Helmut Grabner: Online Boosting, Semi-supervised Online Boosting, Beyond Semi-Supervised Online Boosting: http://www.vision.ee.ethz.ch/boostingTrackers/index.htm
Boris Babenko: MIL Track: http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml
Amir Saffari: Online Random Forests: http://www.ymer.org/amir/software/online-random-forests/

General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 

iccv2009 tutorial: boosting and random forest - part III

  • 1. Björn Stenger, 28 Sep 2009, ICCV 2009, Kyoto. Tutorial – Part 3: Tracking Using Classification and Online Learning
  • 2. Roadmap Tracking by classification On-line Boosting Multiple Instance Learning Multi-Classifier Boosting Online Feature Selection Adaptive Trees Ensemble Tracking Online Random Forest Combining off-line & on-line Tracking by optimization
  • 3. Tracking by Optimization. Example: mean shift tracking [Comaniciu et al. 00]. Given: the target location in frame t and its color distribution q. In frame t+1: minimize the distance between the candidate distribution p at location y and the target distribution q. Mean shift is an iterative optimization that finds a local optimum. Extension: downweight features that also occur in the background.
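The mean shift update above can be sketched in a few lines (a minimal NumPy sketch, not Comaniciu et al.'s implementation: the kernel profile and scale handling are omitted, and the histogram bin of each pixel is assumed precomputed):

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms; the tracker
    minimizes the corresponding distance sqrt(1 - coefficient)."""
    return float(np.sum(np.sqrt(p * q)))

def mean_shift_step(pixels, bins, p, q):
    """One mean-shift iteration: move the window center toward pixels
    whose color bin is under-represented in the candidate histogram p
    relative to the target histogram q.

    pixels: (n, 2) pixel coordinates inside the current window
    bins:   (n,) histogram bin index of each pixel's color
    p, q:   candidate and target histograms (both sum to 1)
    """
    w = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12))   # per-pixel weights
    return (pixels * w[:, None]).sum(axis=0) / w.sum()  # weighted mean = new center
```

Iterating `mean_shift_step` until the center stops moving converges to a local optimum of the similarity, which is why initialization from the previous frame's location matters.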
  • 4. Support Vector Tracking [Avidan 01]
  • 5. Displacement Expert Tracking [Williams et al. 03]. Learn a nonlinear mapping from images I to displacements δu. Off-line training; on-line tracking.
  • 6. Displacement Expert Tracking (2) [Williams et al. 03]. Results on 136 frames of a face sequence (no scale changes).
  • 7. Online Selection of Discriminative Features [Collins et al. 03]. Select, from a feature pool, the features that best discriminate between object and background. Discriminative score: the separability (variance ratio) of foreground vs. background: within-class variance should be small, total variance should be large.
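The variance-ratio score can be sketched as below (a simplified sketch: Collins et al. compute the ratio on binned log-likelihood values of each feature, while here it is applied directly to feature samples):

```python
import numpy as np

def variance_ratio(fg, bg, eps=1e-12):
    """Discriminative score of a feature: variance over all samples
    divided by the sum of the within-class variances. Large when both
    classes are tight (small within-class variance) but far apart
    (large total variance)."""
    total = np.concatenate([fg, bg])
    return float(np.var(total) / (np.var(fg) + np.var(bg) + eps))
```

Features are then ranked by this score and the top-ranked ones are used for tracking.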
  • 8. On-line Feature Selection (2) [Collins et al. 03]. Features of the input image are ranked according to variance ratio; mean shift is run separately on the top-ranked features and the resulting estimates are combined by taking the median to obtain the new location.
  • 9. Ensemble Tracking [Avidan 05]. Use classifiers to distinguish the object from the background in feature space. The first location is provided manually; all pixels are training data, labeled {+1, -1}. Each pixel gets an 11-dimensional feature vector: an 8-bin orientation histogram of its 5×5 neighborhood plus 3 RGB values.
  • 10. Ensemble Tracking [Avidan 05]. Train T (=5) weak linear classifiers h on the foreground/background feature space and combine them into a strong classifier with AdaBoost. Build a confidence map from the classifier margins, scaling positive margins to [0,1], and find its mode using mean shift.
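Building the confidence map from boosted margins might look like this (a sketch; the weak classifiers here are hypothetical (w, b) linear separators on the 11-D pixel features):

```python
import numpy as np

def confidence_map(X, weak_classifiers, alphas):
    """Per-pixel foreground confidence from the margin of the boosted
    classifier. X: (n_pixels, d) feature vectors; each weak classifier
    is a pair (w, b) predicting sign(x . w + b); alphas are the
    AdaBoost weights. Positive margins are scaled to [0, 1]."""
    margin = sum(a * np.sign(X @ w + b)
                 for a, (w, b) in zip(alphas, weak_classifiers))
    return np.clip(margin, 0.0, None) / (sum(alphas) + 1e-12)
```

Mean shift is then run on this map (reshaped to image coordinates) to find the new object location.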
  • 11. Ensemble Tracking Update [Avidan 05]. Test examples x_i using the strong classifier H(x). For each new frame I_j: run mean shift on the confidence map, obtain new pixel labels y, keep the K (=4) best (lowest-error) weak classifiers and update their weights, and train T-K (=1) new weak classifiers.
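The keep-K, retrain-the-rest step can be sketched as follows (a minimal sketch; `train_new` stands in for training a fresh weak classifier on the newly labeled pixels):

```python
def update_ensemble(weak_classifiers, errors, K, train_new):
    """Keep the K weak classifiers with the lowest error on the new
    frame, in order of increasing error, and train fresh replacements
    for the remaining T - K slots."""
    T = len(weak_classifiers)
    keep = sorted(range(T), key=lambda i: errors[i])[:K]
    return [weak_classifiers[i] for i in keep] + \
           [train_new() for _ in range(T - K)]
```

This slow turnover of weak classifiers is what lets the ensemble adapt to appearance changes without forgetting the object entirely.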
  • 14. AdaBoost (recap) [Freund, Schapire 97]. Diagram: training examples (x_1, y_1), ..., (x_N, y_N), initially weighted equally, are used to train weak classifiers h_1, h_2, h_3, h_4, which form the weighted combination H.
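The recap can be made concrete with a small discrete-AdaBoost sketch (decision stumps on 1-D data stand in for the weak learners; the weight update follows Freund and Schapire's scheme):

```python
import numpy as np

def stump(X, y, w):
    """Best threshold/polarity decision stump under sample weights w."""
    best = (np.inf, None, None)
    for t in X:
        for s in (1, -1):
            pred = np.where(X > t, s, -s)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, t, s)
    _, t, s = best
    return lambda Z: np.where(Z > t, s, -s)

def adaboost(X, y, T=5):
    """Train T weighted stumps; return the sign of their weighted vote."""
    w = np.full(len(y), 1.0 / len(y))
    hs, alphas = [], []
    for _ in range(T):
        h = stump(X, y, w)
        pred = h(X)
        err = w[pred != y].sum()
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w = w * np.exp(-alpha * y * pred)   # upweight mistakes
        w /= w.sum()
        hs.append(h)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for h, a in zip(hs, alphas)))
```

The reweighting step is the part the online variants below have to approximate when examples arrive one at a time.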
  • 17. Online Boosting [Oza, Russell 01]. Diagram: training examples (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), ... arrive one at a time; each example passes through the weak classifiers h_1, h_2, h_3, h_4 in sequence, and the weak classifiers form the weighted combination H.
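One pass of the Oza-Russell update might be sketched as below (a sketch: the weak-learner interface (update/predict) and the error bookkeeping via lam_sc/lam_sw follow the paper's pseudocode, but the details are simplified):

```python
import numpy as np

def online_boost_update(x, y, weak_learners, lam_sc, lam_sw, rng):
    """Oza-Russell online boosting step for one labeled example (x, y).
    The example passes through the weak learners in sequence with an
    importance weight lam: each learner trains on it k ~ Poisson(lam)
    times, and lam is increased after a mistake (decreased after a
    correct prediction), mimicking AdaBoost's reweighting.
    lam_sc[m] / lam_sw[m] accumulate the correctly / wrongly classified
    weight seen by learner m, giving its running error estimate eps."""
    lam = 1.0
    for m, h in enumerate(weak_learners):
        for _ in range(rng.poisson(lam)):
            h.update(x, y)
        if h.predict(x) == y:
            lam_sc[m] += lam
            eps = lam_sw[m] / (lam_sc[m] + lam_sw[m])
            lam *= 1.0 / (2.0 * (1.0 - eps))   # downweight: already handled
        else:
            lam_sw[m] += lam
            eps = lam_sw[m] / (lam_sc[m] + lam_sw[m])
            lam *= 1.0 / (2.0 * eps)           # upweight: hard example
    return lam
```

The Poisson sampling is what replaces batch AdaBoost's explicit weight distribution: an example with importance lam is simply shown to a learner about lam times on average.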
  • 19. Priming can help [Oza 01] Batch learning on first 200 points, then online
  • 20. Online Boosting for Feature Selection [Grabner, Bischof 06]. Each feature corresponds to a weak classifier; boosting selects a combination of simple features.
  • 21. Selectors [Grabner, Bischof 06]. A selector chooses one feature/classifier from a pool, so selectors can themselves be seen as classifiers. Idea: perform boosting on the selectors, not on the features directly.
  • 22. Online Feature Selection [Grabner, Bischof 06]. For each training sample: initialize its importance weight; then, for each selector in turn (all drawing from a global classifier pool): estimate the errors, select the best weak classifier, update its voting weight, and re-estimate the sample's importance. The result is the current strong classifier.
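A selector can be sketched as a thin wrapper over the classifier pool (a sketch; here weak classifiers are plain functions x -> label, and errors are estimated from importance-weighted correct/wrong counts):

```python
class Selector:
    """Chooses, at any time, the weak classifier from a shared pool
    with the lowest estimated (importance-weighted) error, and acts
    as that classifier. Boosting then operates on selectors."""
    def __init__(self, pool):
        self.pool = pool
        self.right = [1e-6] * len(pool)  # weighted correct counts
        self.wrong = [1e-6] * len(pool)  # weighted error counts

    def update(self, x, y, lam=1.0):
        for i, h in enumerate(self.pool):
            if h(x) == y:
                self.right[i] += lam
            else:
                self.wrong[i] += lam

    def error(self, i):
        return self.wrong[i] / (self.right[i] + self.wrong[i])

    def predict(self, x):
        best = min(range(len(self.pool)), key=self.error)
        return self.pool[best](x)
```

Because the pool is shared and errors are re-estimated on every sample, the feature behind each selector can change over time, which is exactly what makes the tracker adaptive.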
  • 23. Tracking Principle [Grabner, Bischof 06] [slide credit H. Grabner]
  • 27. Multiple Instance Learning [Babenko et al. 09]. Supervised learning trains a classifier from individually labeled examples; MIL trains a classifier from labeled bags of examples, where a positive bag contains at least one positive instance.
  • 29. Online MIL Boost [Babenko et al. 09]. From frame t to frame t+1: get data (bags), update all classifiers in the pool, and greedily add the best K to the strong classifier.
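The bag-level probability behind MILBoost-style losses is the "noisy OR" of the instance probabilities (a minimal sketch; Babenko et al.'s online version uses this inside the bag log-likelihood that scores candidate weak classifiers):

```python
import numpy as np

def bag_probability(instance_probs):
    """A bag is positive iff at least one instance is positive:
    p(bag) = 1 - prod_i (1 - p(instance_i))."""
    return float(1.0 - np.prod(1.0 - np.asarray(instance_probs)))

def bag_log_likelihood(bags, labels, eps=1e-12):
    """Log-likelihood of bag labels (1 = positive, 0 = negative)
    under the noisy-OR model; the quantity MILBoost maximizes."""
    ll = 0.0
    for probs, y in zip(bags, labels):
        p = bag_probability(probs)
        ll += y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)
    return ll
```

This is why MIL tolerates imprecise cropping during tracking: the positive bag of patches around the estimated location only needs to contain one good patch.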
  • 32. Semi-supervised [Grabner et al. 08]. Use labeled data as a prior; estimate labels and sample importance for the unlabeled data.
  • 35. Beyond Semi-Supervised [Stalder et al. 09]. The fixed prior is "too inflexible"; add an object-specific recognizer as an "adaptive prior", updated with positives (tracked samples validated by the detector) and negatives (background during detection).
  • 39. Learning to Track with Multiple Observers [Stenger et al. 09]. Idea: learn the optimal combination of observers (trackers) in an off-line training stage; each tracker can be fixed or adaptive. Given labeled training data and an object detector, observation models and observer combinations are trained off-line to obtain the optimal tracker for the task at hand.
  • 40. Input: a set of observers, each returning a location estimate and a confidence value [Stenger et al. 09]. On-line classifiers: [OB] on-line boosting, [LDA] linear discriminant analysis, [BLDA] boosted LDA, [OFS] on-line feature selection. Histogram: [MS] color-based mean shift, [C] color probability, [M] motion probability, [CM] color and motion probability. Local features: [BOF] block-based optical flow, [KLT] Kanade-Lucas-Tomasi, [FF] flocks of features, [RT] randomized templates. Single template: [NCC] normalized cross-correlation, [SAD] sum of absolute differences.
  • 41. Combination Schemes [Stenger et al. 09]. Find good combinations of observers automatically by evaluating all pairs/triplets, using two different combination schemes.
  • 42. How to Measure Performance? [Stenger et al. 09] Run each tracker on all frames (don't stop after the first failure) and measure the position error; a loss of track is declared when the error is above a threshold, after which the tracker is re-initialized with the detector.
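Counting loss-of-track events under this protocol can be sketched as follows (a sketch under the stated assumption that consecutive over-threshold frames before re-initialization count as a single loss):

```python
def count_track_losses(position_errors, threshold):
    """Number of loss-of-track events: a loss is recorded when the
    position error first exceeds the threshold; the tracker is then
    considered lost until the detector re-initializes it, i.e. until
    the error drops back below the threshold."""
    losses, tracking = 0, True
    for err in position_errors:
        if tracking and err > threshold:
            losses += 1
            tracking = False
        elif err <= threshold:
            tracking = True
    return losses
```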
  • 43. Results on Hand Data (Single Observers) [Stenger et al. 09]
  • 44. Results on Hand Data Single observers Pairs of observers [Stenger et al. 09]
  • 46. Face Tracking Results [Stenger et al. 09]
  • 52. Incremental Building of Vocabulary Tree [Yeh et al. 07]
  • 53. Tree Growing by Splitting Leaf Nodes [Yeh et al. 07]
  • 54. Tree Adaptation with Re-Clustering [Yeh et al. 07]. Identify the affected neighborhood, remove existing boundaries, and re-cluster the points.
  • 55. Accuracy drops when adaptation is stopped [Yeh et al. 07]. Recent accuracy is measured over a sliding window of T = 100 queries, where R(j) = 1 if the top-ranked retrieved image belongs to the same group.
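The "recent accuracy" curve can be sketched as a sliding-window mean over the per-query indicator R (a minimal sketch):

```python
def recent_accuracy(R, j, T=100):
    """Fraction of correct top-1 retrievals among the last T queries
    up to and including query j; R[i] is 1 if the top-ranked image
    for query i belongs to the same group, else 0."""
    window = R[max(0, j - T + 1): j + 1]
    return sum(window) / len(window)
```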
  • 57. On-line Random Forests [Saffari et al. 09]. For each tree t in the forest: update tree t with the new training example k times, with k drawn at random as in online bagging; when the example is not used for updating, use it to estimate the tree's out-of-bag error. A tree t is discarded and replaced by a new one with a probability that grows with its out-of-bag error.
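One online-RF step might be sketched as follows (a simplified sketch of Saffari et al.'s procedure: the trees expose hypothetical update/predict methods, the Poisson rate is fixed at 1, and the discard rule replaces a tree with probability proportional to its smoothed out-of-bag error):

```python
import numpy as np

def online_rf_step(trees, oob_err, x, y, rng, discard_scale=0.1):
    """Update each tree with the new example k ~ Poisson(1) times.
    When k == 0 the example is out-of-bag for that tree and updates
    its (exponentially smoothed) out-of-bag error estimate; trees are
    then discarded and replaced with probability growing with that
    error, so persistently bad trees get regrown."""
    for i, tree in enumerate(trees):
        k = rng.poisson(1.0)
        if k > 0:
            for _ in range(k):
                tree.update(x, y)
        else:  # out-of-bag example: estimate generalization error
            mistake = float(tree.predict(x) != y)
            oob_err[i] = 0.95 * oob_err[i] + 0.05 * mistake
        if rng.random() < discard_scale * oob_err[i]:
            trees[i] = type(tree)()   # replace with a fresh tree
            oob_err[i] = 0.5          # reset to an uninformative error
```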
  • 59. Results [Saffari et al. 09] Convergence of on-line RF classification to batch solution on USPS data set Tracking error of online RF compared to online boosting
  • 61. References
Avidan, S., Support Vector Tracking, Proc. CVPR, Hawaii, 2001.
Avidan, S., Support Vector Tracking, IEEE Trans. PAMI, Vol. 26(8), pp. 1064-1072, 2004.
Avidan, S., Ensemble Tracking, IEEE Trans. PAMI, Vol. 29(2), pp. 261-271, 2007.
Avidan, S., Ensemble Tracking, Proc. CVPR, San Diego, USA, 2005.
Babenko, B., Yang, M.-H., Belongie, S., Visual Tracking with Online Multiple Instance Learning, Proc. CVPR, 2009.
Basak, J., Online adaptive decision trees, Neural Computation, Vol. 16(9), pp. 1959-1981, September 2004.
Collins, R. T., Liu, Y., Leordeanu, M., On-Line Selection of Discriminative Tracking Features, IEEE Trans. PAMI, Vol. 27(10), pp. 1631-1643, October 2005.
Collins, R. T., Liu, Y., On-Line Selection of Discriminative Tracking Features, Proc. ICCV, pp. 346-352, October 2003.
Comaniciu, D., Ramesh, V., Meer, P., Kernel-Based Object Tracking, IEEE Trans. PAMI, Vol. 25(5), pp. 564-575, 2003.
Comaniciu, D., Ramesh, V., Meer, P., Real-Time Tracking of Non-Rigid Objects using Mean Shift, Proc. CVPR, Hilton Head Island, South Carolina, Vol. 2, pp. 142-149, 2000.
Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T., Solving the multiple instance problem with axis-parallel rectangles, Artificial Intelligence, Vol. 89, pp. 31-71, 1997.
Freund, Y., Schapire, R. E., A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, Vol. 55(1), pp. 119-139, August 1997.
Grabner, H., Leistner, C., Bischof, H., Semi-supervised On-line Boosting for Robust Tracking, Proc. ECCV, 2008.
Grabner, H., Roth, P. M., Bischof, H., Eigenboosting: Combining Discriminative and Generative Information, Proc. CVPR, 2007.
Grabner, H., Grabner, M., Bischof, H., Real-time Tracking via On-line Boosting, Proc. BMVC, Vol. 1, pp. 47-56, 2006.
Grabner, H., Bischof, H., On-line Boosting and Vision, Proc. CVPR, Vol. 1, pp. 260-267, 2006.
Keeler, J. D., Rumelhart, D. E., Leow, W.-K., Integrated segmentation and recognition of hand-printed numerals, Proc. NIPS 3, pp. 557-563, Denver, Colorado, USA, 1990.
Kim, T.-K., Cipolla, R., MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features, Proc. NIPS, Vancouver, Canada, December 2008.
Kim, T.-K., Woodley, T., Stenger, B., Cipolla, R., Online Multiple Classifier Boosting for Object Tracking, Technical Report CUED/F-INFENG/TR631, Department of Engineering, University of Cambridge, June 2009.
  • 62. References & Code
Li, Y., Ai, H., Lao, S., Kawade, M., Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Lifespans, Proc. CVPR, 2007.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, Proc. BMVC, 2003.
Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, IEEE Trans. PAMI, Vol. 26(6), pp. 810-815, June 2004.
Okuma, K., Taleghani, A., De Freitas, N., Little, J., Lowe, D. G., A Boosted Particle Filter: Multitarget Detection and Tracking, Proc. ECCV, May 2004.
Oza, N. C., Online Ensemble Learning, Ph.D. thesis, University of California, Berkeley.
Oza, N. C., Russell, S., Online Bagging and Boosting, Eighth Int. Workshop on Artificial Intelligence and Statistics, pp. 105-112, Key West, FL, USA, January 2001.
Oza, N. C., Russell, S., Experimental Comparisons of Online and Batch Versions of Bagging and Boosting, Proc. ACM SIGKDD, San Francisco, California, 2001.
Saffari, A., Leistner, C., Santner, J., Godec, M., Bischof, H., On-line Random Forests, 3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.
Stalder, S., Grabner, H., Van Gool, L., Beyond Semi-Supervised Tracking: Tracking Should Be as Simple as Detection, but not Simpler than Recognition, Proc. ICCV Workshop on On-line Learning for Computer Vision, 2009.
Stenger, B., Woodley, T., Cipolla, R., Learning to Track With Multiple Observers, Proc. CVPR, Miami, June 2009.
Viola, P. A., Platt, J., Zhang, C., Multiple instance boosting for object detection, Proc. NIPS, 2005.
Williams, O., Blake, A., Cipolla, R., Sparse Bayesian Regression for Efficient Visual Tracking, IEEE Trans. PAMI, August 2005.
Williams, O., Blake, A., Cipolla, R., A Sparse Probabilistic Learning Algorithm for Real-Time Tracking, Proc. ICCV, October 2003.
Woodley, T., Stenger, B., Cipolla, R., Tracking Using Online Feature Selection and a Local Generative Model, Proc. BMVC, Warwick, September 2007.
Yeh, T., Lee, J., Darrell, T., Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning, Proc. ICCV, 2007.
Code:
Severin Stalder, Helmut Grabner: Online Boosting, Semi-supervised Online Boosting, Beyond Semi-Supervised Online Boosting, http://www.vision.ee.ethz.ch/boostingTrackers/index.htm
Boris Babenko: MILTrack, http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml
Amir Saffari: Online Random Forests, http://www.ymer.org/amir/software/online-random-forests/