Inverse Problems Regularization
Gabriel Peyré
www.numerical-tours.com
Overview
• Variational Priors
• Gradient Descent and PDE’s
• Inverse Problems Regularization
Smooth and Cartoon Priors
Sobolev semi-norm: $J(f) = \|f\|_{W^{1,2}}^2 = \int_{\mathbb{R}^2} \|\nabla f(x)\|^2 \, dx$
Total variation semi-norm: $J(f) = \|f\|_{TV} = \int_{\mathbb{R}^2} \|\nabla f(x)\| \, dx$
[Figures: maps of $|\nabla f|^2$ and $|\nabla f|$]
Natural Image Priors
Discrete Priors
Discrete Differential Operators
Laplacian Operator
Gradient: Images vs. Functionals
Function: $\tilde f : x \in \mathbb{R}^2 \mapsto \tilde f(x) \in \mathbb{R}$
$\tilde f(x + \varepsilon) = \tilde f(x) + \langle \nabla \tilde f(x), \varepsilon \rangle_{\mathbb{R}^2} + O(\|\varepsilon\|_{\mathbb{R}^2}^2)$
$\nabla \tilde f(x) = (\partial_1 \tilde f(x), \partial_2 \tilde f(x)) \in \mathbb{R}^2$
Discrete image: $f \in \mathbb{R}^N$, $N = n^2$, $f[i_1, i_2] = \tilde f(i_1/n, i_2/n)$, $\nabla f[i] \approx \nabla \tilde f(i/n)$
Functional: $J : f \in \mathbb{R}^N \mapsto J(f) \in \mathbb{R}$, $\quad \nabla J : \mathbb{R}^N \mapsto \mathbb{R}^N$
$J(f + \eta) = J(f) + \langle \nabla J(f), \eta \rangle_{\mathbb{R}^N} + O(\|\eta\|_{\mathbb{R}^N}^2)$
Sobolev: $J(f) = \frac{1}{2}\|\nabla f\|^2$, $\quad \nabla J(f) = (\nabla^* \circ \nabla) f = -\Delta f$
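To make these discrete operators concrete, here is a minimal NumPy sketch (an illustration, not part of the slides) of a forward-difference gradient with periodic boundaries and the matching divergence, chosen so that $\nabla^* = -\operatorname{div}$; the helper names grad and div are assumptions, reused in the later sketches:

```python
import numpy as np

def grad(f):
    """Forward-difference gradient of an n x n image; returns an (n, n, 2) vector field."""
    gx = np.roll(f, -1, axis=0) - f  # discrete partial derivative along axis 0 (periodic)
    gy = np.roll(f, -1, axis=1) - f  # discrete partial derivative along axis 1 (periodic)
    return np.stack([gx, gy], axis=-1)

def div(v):
    """Discrete divergence, defined so that <grad f, v> = -<f, div v>, i.e. div = -grad*."""
    dx = v[..., 0] - np.roll(v[..., 0], 1, axis=0)  # backward difference along axis 0
    dy = v[..., 1] - np.roll(v[..., 1], 1, axis=1)  # backward difference along axis 1
    return dx + dy
```

With these conventions, the Sobolev gradient reads `-div(grad(f))`, a discrete $-\Delta f$.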
Total Variation Gradient
If $\forall n, \ \nabla f[n] \neq 0$: $\quad \nabla J(f) = -\operatorname{div}\left(\dfrac{\nabla f}{\|\nabla f\|}\right)$
If $\exists n, \ \nabla f[n] = 0$: $J$ is not differentiable at $f$.
Sub-differential: $\partial J(f) = \{ -\operatorname{div}(\alpha) \;;\; \|\alpha[n]\| \leq 1 \text{ and } \alpha \in C_{\nabla f} \}$
where $C_u = \{ \alpha \in \mathbb{R}^{2 \times N} \;;\; (u[n] \neq 0) \Rightarrow (\alpha[n] = u[n]/\|u[n]\|) \}$
[Figures: $\|\nabla f\|$ and $\nabla J(f)$]
Regularized Total Variation
$\|u\|_\varepsilon = \sqrt{\|u\|^2 + \varepsilon^2}$, $\quad J_\varepsilon(f) = \sum_n \|\nabla f[n]\|_\varepsilon$
$\nabla J_\varepsilon(f) = -\operatorname{div}\left(\dfrac{\nabla f}{\|\nabla f\|_\varepsilon}\right)$
$\nabla J_\varepsilon \sim -\Delta/\varepsilon$ when $\varepsilon \to +\infty$
[Plots: $\sqrt{x^2 + \varepsilon^2}$ vs. $|x|$; map of $\nabla J_\varepsilon(f)$]
Overview
• Variational Priors
• Gradient Descent and PDE’s
• Inverse Problems Regularization
Gradient Descent
$f^{(k+1)} = f^{(k)} - \tau_k \nabla J(f^{(k)})$, $\quad f^{(0)}$ is given.
Theorem: If $J$ is convex and $\mathcal{C}^1$, $\nabla J$ is $L$-Lipschitz, and $0 < \tau < 2/L$, then $f^{(k)} \xrightarrow{k \to +\infty} f^\star$, a solution of $\min_f J(f)$.
Optimal step size: $\tau_k = \operatorname{argmin}_{\tau \in \mathbb{R}^+} J(f^{(k)} - \tau \nabla J(f^{(k)}))$
Proposition: One has $\langle \nabla J(f^{(k+1)}), \nabla J(f^{(k)}) \rangle = 0$.
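In code, the fixed-step scheme is a one-line loop; a minimal sketch, assuming a generic gradient callback grad_J and a step tau below 2/L:

```python
def gradient_descent(grad_J, f0, tau, n_iter=500):
    """Fixed-step gradient descent: f <- f - tau * grad_J(f)."""
    f = f0.copy()
    for _ in range(n_iter):
        f = f - tau * grad_J(f)
    return f
```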
Gradient Flows and PDE's
Fixed step size $\tau_k = \tau$: $\quad \dfrac{f^{(k+1)} - f^{(k)}}{\tau} = -\nabla J(f^{(k)})$
Denote $f_t = f^{(k)}$ for $t = k\tau$; one obtains formally as $\tau \to 0$:
$\forall\, t > 0, \quad \dfrac{\partial f_t}{\partial t} = -\nabla J(f_t) \quad$ and $\quad f_0 = f^{(0)}$
Sobolev flow: $J(f) = \frac{1}{2}\int \|\nabla f(x)\|^2 \, dx$ gives the heat equation $\dfrac{\partial f_t}{\partial t} = \Delta f_t$.
Explicit solution: $f_t = f_0 \star G_t$, with $G_t(x) = \frac{1}{4\pi t} e^{-\|x\|^2/(4t)}$ the Gaussian kernel.
Total Variation Flow
$\dfrac{\partial f_t}{\partial t} = -\nabla J(f_t)$

Application: Denoising
Noisy observations: $y = f + w$, $\quad w \sim \mathcal{N}(0, \sigma^2 \operatorname{Id}_N)$.
Flow: $\dfrac{\partial f_t}{\partial t} = -\nabla J(f_t) \quad$ and $\quad f_{t=0} = y$.
Optimal Parameter Selection
Optimal choice of $t$: minimize $\|f_t - f\|$ → not accessible in practice.
$\operatorname{SNR}(f_t, f) = -20 \log_{10}\left(\dfrac{\|f - f_t\|}{\|f\|}\right)$
[Plots: SNR as a function of $t$]
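A sketch of denoising by the discretized flow with oracle stopping, assuming the clean image f is available to evaluate the SNR (as in the plots above) and reusing grad_tv_eps:

```python
def denoise_by_flow(y, f, eps=1e-2, tau=1e-3, n_iter=200):
    """Integrate df/dt = -grad J_eps(f) from f_0 = y; return the iterate with best SNR."""
    ft, best, best_snr = y.copy(), y.copy(), -np.inf
    for _ in range(n_iter):
        ft = ft - tau * grad_tv_eps(ft, eps)
        snr = -20 * np.log10(np.linalg.norm(f - ft) / np.linalg.norm(f))
        if snr > best_snr:
            best, best_snr = ft.copy(), snr
    return best
```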
Overview
• Variational Priors
• Gradient Descent and PDE’s
• Inverse Problems Regularization
Inverse Problems

Inverse Problem Regularization
Sobolev Regularization
Sobolev prior: $J(f) = \frac{1}{2}\|\nabla f\|^2$
$f^\star = \operatorname{argmin}_{f \in \mathbb{R}^N} E(f) = \|y - \Phi f\|^2 + \lambda \|\nabla f\|^2 \quad$ (assuming $1 \notin \ker(\Phi)$)
Proposition: $\nabla E(f^\star) = 0 \iff (\Phi^*\Phi - \lambda\Delta) f^\star = \Phi^* y$
→ Large scale linear system.
Gradient descent: $f^{(k+1)} = f^{(k)} - \tau \nabla E(f^{(k)})$
Convergence: $\tau < 2/\|\Phi^*\Phi - \lambda\Delta\|$ where $\|A\| = \lambda_{\max}(A)$
→ Slow convergence.
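A sketch of this descent for a diagonal masking operator $\Phi$ (a pointwise 0/1 mask, chosen here for illustration; mask, lam, and tau are assumptions), reusing grad and div from above:

```python
def sobolev_regularization(y, mask, lam, tau, n_iter=2000):
    """Minimize ||y - Phi f||^2 + lam * ||grad f||^2 by gradient descent,
    with Phi f = mask * f (diagonal, hence self-adjoint)."""
    f = y.copy()
    for _ in range(n_iter):
        # grad E(f) = 2 Phi*(Phi f - y) + 2 lam * (-Delta f), with Delta f = div(grad f)
        grad_E = 2 * mask * (mask * f - y) - 2 * lam * div(grad(f))
        f = f - tau * grad_E
    return f
```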
Example: Inpainting
Mask $M$ of missing pixels, $\Phi = \operatorname{diag}_i(1_{i \notin M})$:
$(\Phi f)[i] = \begin{cases} 0 & \text{if } i \in M, \\ f[i] & \text{otherwise.} \end{cases}$
[Figure: measurements $y$ and iterations #1, #3, #50 of the inpainting algorithm on a uniformly regular image]
[Plots: energy $E(f^{(k)})$ and error $\log_{10}(\|f^{(k)} - f^{(\infty)}\|/\|f_0\|)$ as functions of $k$]
Conjugate Gradient
Symmetric linear system: $Ax = b \iff \min_{x \in \mathbb{R}^n} E(x) = \frac{1}{2}\langle Ax, x \rangle - \langle x, b \rangle$
Intuition: $x^{(k+1)} = \operatorname{argmin} E(x)$ s.t. $x - x^{(k)} \in \operatorname{span}(\nabla E(x^{(0)}), \ldots, \nabla E(x^{(k)}))$
Proposition: $\forall\, \ell < k, \ \langle \nabla E(x^{(k)}), \nabla E(x^{(\ell)}) \rangle = 0$
Initialization: $x^{(0)} \in \mathbb{R}^N$, $\quad r^{(0)} = b - Ax^{(0)}$, $\quad d^{(0)} = -r^{(0)}$
Iterations:
$v^{(k)} = \nabla E(x^{(k)}) = Ax^{(k)} - b$
$d^{(k)} = v^{(k)} + \dfrac{\|v^{(k)}\|^2}{\|v^{(k-1)}\|^2} \, d^{(k-1)}$
$\rho^{(k)} = \dfrac{\langle v^{(k)}, d^{(k)} \rangle}{\langle A d^{(k)}, d^{(k)} \rangle}$
$x^{(k+1)} = x^{(k)} - \rho^{(k)} d^{(k)}$
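A runnable sketch of these iterations for a symmetric positive definite matrix A (the stopping tolerance tol is an assumption):

```python
def conjugate_gradient(A, b, x0, n_iter=100, tol=1e-10):
    """Solve Ax = b by minimizing E(x) = <Ax, x>/2 - <x, b> with the iterations above."""
    x = x0.copy()
    v = A @ x - b            # v^(k) = grad E(x^(k))
    d = v.copy()             # initial descent direction
    for _ in range(n_iter):
        rho = (v @ d) / (d @ (A @ d))
        x = x - rho * d
        v_new = A @ x - b
        if np.linalg.norm(v_new) < tol:
            break
        d = v_new + ((v_new @ v_new) / (v @ v)) * d  # ratio ||v^(k)||^2 / ||v^(k-1)||^2
        v = v_new
    return x
```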
TV" regularization: (assuming 1 /2 ker( ))
f?
= argmin
f2RN
E(f) =
1
2
|| f y|| + J"
(f)
Total Variation Regularization
||u||" =
p
||u||2 + "2 J"(f) =
P
n ||rf[n]||"
TV" regularization: (assuming 1 /2 ker( ))
f(k+1)
= f(k)
⌧krE(f(k)
)
rE(f) = ⇤
( f y) + rJ"(f)
rJ"(f) = div
✓
rf
||rf||"
◆
Convergence: requires ⌧ ⇠ ".
Gradient descent:
f?
= argmin
f2RN
E(f) =
1
2
|| f y|| + J"
(f)
Total Variation Regularization
||u||" =
p
||u||2 + "2 J"(f) =
P
n ||rf[n]||"
TV" regularization: (assuming 1 /2 ker( ))
f(k+1)
= f(k)
⌧krE(f(k)
)
rE(f) = ⇤
( f y) + rJ"(f)
rJ"(f) = div
✓
rf
||rf||"
◆
Convergence: requires ⌧ ⇠ ".
Gradient descent:
f?
= argmin
f2RN
E(f) =
1
2
|| f y|| + J"
(f)
Newton descent:
f(k+1)
= f(k)
H 1
k rE(f(k)
) where Hk = @2
E"(f(k)
)
Total Variation Regularization
||u||" =
p
||u||2 + "2 J"(f) =
P
n ||rf[n]||"
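A sketch combining the pieces above, again with a diagonal masking $\Phi$ (mask, lam, eps, and tau are assumptions to be tuned):

```python
def tv_regularization(y, mask, lam, eps, tau, n_iter=2000):
    """Minimize ||Phi f - y||^2 / 2 + lam * J_eps(f) by gradient descent."""
    f = y.copy()
    for _ in range(n_iter):
        grad_E = mask * (mask * f - y) + lam * grad_tv_eps(f, eps)
        f = f - tau * grad_E
    return f
```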
TV vs. Sobolev
[Plots: convergence of $E(f^{(k)})$ with $k$, for large $\varepsilon$ and small $\varepsilon$]

Inpainting: Sobolev vs. TV
[Figure: observations $y$, Sobolev reconstruction, total variation reconstruction]
Projected Gradient Descent
Noiseless problem: $f^\star \in \operatorname{argmin}_f J_\varepsilon(f)$ s.t. $f \in \mathcal{H}$ $\qquad (\star)$
Constraint: $\mathcal{H} = \{ f \;;\; \Phi f = y \}$.
Projected gradient descent: $f^{(k+1)} = \operatorname{Proj}_{\mathcal{H}}\left( f^{(k)} - \tau_k \nabla J_\varepsilon(f^{(k)}) \right)$
$\operatorname{Proj}_{\mathcal{H}}(f) = \operatorname{argmin}_{\Phi g = y} \|g - f\|^2 = f + \Phi^*(\Phi\Phi^*)^{-1}(y - \Phi f)$
Inpainting: $\operatorname{Proj}_{\mathcal{H}}(f)[i] = \begin{cases} f[i] & \text{if } i \in M, \\ y[i] & \text{otherwise.} \end{cases}$
Proposition: If $\nabla J_\varepsilon$ is $L$-Lipschitz and $0 < \tau_k < 2/L$, then $f^{(k)} \xrightarrow{k \to +\infty} f^\star$, a solution of $(\star)$.
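For inpainting the projection just resets the observed pixels; a sketch, assuming mask equals 1 on the observed pixels (the complement of $M$):

```python
def inpaint_projected_gradient(y, mask, eps, tau, n_iter=2000):
    """Minimize J_eps(f) subject to f = y on the observed pixels."""
    f = y.copy()
    for _ in range(n_iter):
        f = f - tau * grad_tv_eps(f, eps)   # gradient step on J_eps
        f = np.where(mask == 1, y, f)       # Proj_H: re-impose f = y where observed
    return f
```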
Conclusion
Priors: non-quadratic (TV rather than Sobolev) ⟹ better edge recovery.
Variational regularization ⟺ Optimization:
– Gradient descent. – Newton.
– Projected gradient. – Conjugate gradient.
Non-smooth optimization?