1. Energy Minimization with Label Costs and Model Fitting. Presented by Yuri Boykov; co-authors: Andrew Delong, Anton Osokin, Hossam Isack.
3. Reconstruction in Vision (a basic example): observed noisy image I = {I1, I2, ..., In}; image labeling L = {L1, L2, ..., Ln} (restored intensities). How do we compute L from I?
4. Energy minimization (discrete approach): MRF framework, the weak membrane model (Geman & Geman '84; Blake & Zisserman '83, '87). The energy combines data fidelity with spatial regularization via discontinuity-preserving potentials.
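The transcript drops the slide's formula; a hedged reconstruction of the standard first-order energy, with the truncated quadratic as one common discontinuity-preserving choice:

    E(L) = \sum_p D_p(L_p) + \sum_{(p,q) \in \mathcal{N}} V_{pq}(L_p, L_q),
    \qquad D_p(L_p) = (L_p - I_p)^2,
    \qquad V_{pq}(L_p, L_q) = \min\{ (L_p - L_q)^2,\; T \}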
5. Optimization. Convex regularization: gradient descent works, and exact polynomial algorithms exist. TV regularization: a bit harder (non-differentiable), but global-minimum algorithms exist (Ishikawa, Hochbaum, Nikolova et al.). Robust regularization: NP-hard with many local minima; good approximations exist (message passing, α-expansion).
6. Potts model (piecewise-constant labeling): robust regularization, NP-hard with many local minima; provably good approximations via α-expansion (max-flow/min-cut combinatorial algorithms).
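The Potts interaction referenced here penalizes any label change by a constant; in standard notation (w_pq a pairwise weight, [·] the indicator):

    V_{pq}(L_p, L_q) = w_{pq} \cdot [\, L_p \neq L_q \,]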
7. Potts model (piecewise-constant labeling) for stereo: a left/right eye image pair is segmented into depth layers. Same story: robust regularization, NP-hard with many local minima, provably good approximations via α-expansion (max-flow/min-cut).
8. Potts model (piecewise-constant labeling), another example. Robust regularization: NP-hard, many local minima; provably good approximations via α-expansion (max-flow/min-cut).
9. Adding label costs. Lippert [PAMI '89]: MDL framework, annealing. Zhu and Yuille [PAMI '96]: continuous formulation (gradient descent). H. Li [CVPR 2007]: AIC/BIC framework, only the 1st and 3rd terms, LP relaxation (approximation with no guarantees). Our new work [CVPR 2010]: extended α-expansion, all 3 terms (the 3rd term is represented as a high-order clique), optimality bound; very fast heuristics exist for the 1st and 3rd terms alone (the facility location problem, 1960s).
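The three terms referenced above are, in the notation of the CVPR 2010 paper (h_l is the cost of using label l at all, and δ_l(L) indicates whether l appears in the labeling):

    E(L) = \sum_p D_p(L_p) + \sum_{(p,q) \in \mathcal{N}} V_{pq}(L_p, L_q) + \sum_{l \in \mathcal{L}} h_l \cdot \delta_l(L),
    \qquad \delta_l(L) = [\, \exists p : L_p = l \,]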
12. With many outliers, quadratic errors fail; use a more robust error measure, e.g. absolute deviations, which give the "median" line. Drawbacks: more expensive computation (non-differentiable), and it still fails once outliers exceed 50%. This motivates RANSAC.
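A hedged reconstruction of the two error measures contrasted on the slide, for a line y = ax + b:

    \min_{a,b} \sum_p (y_p - a x_p - b)^2 \quad \text{(least squares: fails with many outliers)}
    \qquad
    \min_{a,b} \sum_p |y_p - a x_p - b| \quad \text{(least absolute deviations: the "median" line)}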
14. RANSAC with many outliers: 1. randomly sample two points to get a line; 2. count inliers within threshold T (here: 10 inliers).
15. RANSAC with many outliers: 1. randomly sample two points to get a line; 2. count inliers within threshold T (here: 30 inliers); 3. repeat N times and select the model with the most inliers.
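A minimal Python sketch of these three steps for 2D line fitting (the function name and default parameters are illustrative, not from the slides):

    import numpy as np

    def ransac_line(points, T=0.1, N=500, rng=None):
        """Fit one line to 2D points by RANSAC: sample 2 points, count
        inliers within distance T, keep the best of N trials."""
        rng = rng if rng is not None else np.random.default_rng()
        best_line, best_count = None, -1
        for _ in range(N):
            i, j = rng.choice(len(points), size=2, replace=False)
            p, q = points[i], points[j]
            d = q - p
            normal = np.array([-d[1], d[0]], dtype=float)  # perpendicular to sampled segment
            length = np.linalg.norm(normal)
            if length == 0:
                continue                            # degenerate sample (coincident points)
            normal /= length
            dist = np.abs((points - p) @ normal)    # point-to-line distances
            count = int(np.sum(dist < T))
            if count > best_count:                  # keep the model with most inliers
                best_line, best_count = (p, normal), count
        return best_line, best_count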
17. Multiple models and many outliers (higher noise): why not just run RANSAC again? In general, maximizing the number of inliers does not work for outliers plus multiple models.
18. Energy-based approach: an energy-based interpretation of the RANSAC criterion for single-model fitting is to find the optimal label L for one very specific error measure.
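That "very specific error measure" is presumably the 0/1 outlier count: for an inlier threshold T, maximizing the number of inliers is equivalent to minimizing

    E(L) = \sum_p [\, \operatorname{dist}(p, L) > T \,]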
19. Energy-based approach, if multiple models: assign different models (labels Lp) to every point p; find the optimal labeling L = {L1, L2, ..., Ln}. Need regularization!
20. Energy-based approach, if multiple models: assign different models (labels Lp) to every point p; find the optimal labeling L = {L1, L2, ..., Ln}.
21. Energy-based approach, if multiple models: assign different models (labels Lp) to every point p; find the optimal labeling L = {L1, L2, ..., Ln}, where the slide also defines the set of labels allowed at each point p.
22. Energy-based approach, if multiple models: assign different models (labels Lp) to every point p; find the optimal labeling L = {L1, L2, ..., Ln}. Practical problem: the number of potential labels (models) is huge, so how are we going to use α-expansion?
24. PEARL (Propose, Expand, And Reestimate Labels): sample the data to generate a finite set of initial labels (data points + randomly sampled models).
25. PEARL (Propose, Expand, And Reestimate Labels): α-expansion minimizes E(L), a segmentation for the fixed set of labels, yielding models and inliers (labeling L).
26. PEARL (Propose, Expand, And Reestimate Labels): re-estimating labels for the given inliers minimizes the first term of the energy E(L), yielding updated models and inliers (labeling L).
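In other words, each model's parameters are refit to the points currently assigned to it; schematically, for model parameters θ_l:

    \theta_l \leftarrow \arg\min_{\theta} \sum_{p \,:\, L_p = l} D_p(\theta)

(for line fitting with quadratic errors this is an ordinary least-squares refit).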
27. PEARL (Propose, Expand, And Reestimate Labels): α-expansion again minimizes E(L) for the re-estimated label set, yielding models and inliers (labeling L).
28. PEARL (Propose, Expand, And Reestimate Labels): iterate until convergence (shown here after 5 iterations).
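A schematic of the full PEARL loop in Python; the helpers sample_models, alpha_expansion, and refit are hypothetical stand-ins for the propose/expand/re-estimate steps, not an actual API:

    def pearl(points, n_samples=500, max_iters=20):
        """Schematic PEARL loop: propose labels by random sampling, then
        alternate alpha-expansion with model re-estimation until the
        energy stops decreasing. Helper functions are hypothetical."""
        models = sample_models(points, n_samples)       # Propose: random samples
        labeling, energy = None, float("inf")
        for _ in range(max_iters):
            labeling, new_energy = alpha_expansion(points, models)  # Expand
            if new_energy >= energy:                    # no improvement: converged
                break
            energy = new_energy
            models = refit(points, labeling)            # Reestimate on inliers
        return models, labeling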
29. PEARL can significantly improve initial models: single-line fitting with 80% outliers (plot: deviation from ground truth vs. number of initial samples).
36. Comparison for multi-model fitting (high noise): Hough transform, i.e. finding modes in Hough space, e.g. via mean-shift (this also maximizes the number of inliers).
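For reference, a toy sketch of the Hough-space voting being compared against (mode-finding, e.g. mean-shift, would then pick peaks of this accumulator; all names and bin counts are illustrative):

    import numpy as np

    def hough_accumulator(points, n_theta=180, n_rho=200):
        """Toy Hough transform for 2D points: each point votes along a
        sinusoid in (theta, rho) space; accumulator peaks correspond
        to candidate lines."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        r_max = float(np.max(np.linalg.norm(points, axis=1)))
        acc = np.zeros((n_theta, n_rho), dtype=int)
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)   # rho in [-r_max, r_max]
            bins = np.round((rho + r_max) / (2 * r_max) * (n_rho - 1)).astype(int)
            acc[np.arange(n_theta), bins] += 1
        return acc, thetas, r_max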
62. Affine model fitting (from a rectified stereo pair). Birchfield & Tomasi '99: photoconsistency + smoothness, dense model assignments to pixels (fit initial models to the output of another stereo algorithm + α-expansion + re-estimation). PEARL: geometric errors + smoothness + label cost, sparse model assignments to features (sample data + α-expansion + re-estimation).
63. Duh... use the right geometric error measure!!! "Disparity" errors d1 and d2 (bad idea!); "quotient"-based errors d (standard).
64. Affine model fitting (from a rectified stereo pair), revisited: Birchfield & Tomasi '99 (photoconsistency + smoothness, dense model assignments to pixels; fit initial models to the output of another stereo algorithm + α-expansion + re-estimation) vs. PEARL (geometric errors + smoothness + label cost, sparse model assignments to features; sample data + α-expansion + re-estimation).