The document discusses mesh generation, which involves decomposing a domain into simple elements like triangles or tetrahedra. An optimal mesh has good element quality, conforms to the input domain, and uses the minimum number of points needed to make all Voronoi cells sufficiently "fat" or well-shaped according to metrics like radius-edge ratios. The talk presents analysis showing that the optimal mesh size is determined by the "feature size measure" of the input points, an integral of the inverse d-th power of the local feature size, where the local feature size at a point x is the distance from x to its second nearest input point.
52. Local Refinement Algorithms
Pros:
Easy to implement
Often Parallel
Cons:
Termination? Yes.
Accumulations? No.
How many points?
This is what we’ll answer.
53. The size of an optimal mesh is given by
the feature size measure.
lfs_P(x) := distance from x to its second nearest neighbor in P.

Optimal Mesh Size (number of vertices) = Θ( ∫_Ω dx / lfs_P(x)^d )
(the Θ hides a simple exponential in d)

The Feature Size Measure: µ_P(Ω) = ∫_Ω dx / lfs_P(x)^d

When is µ_P(Ω) = O(n)?
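The feature size measure above can be approximated numerically. The following is a minimal sketch of my own (not code from the talk): a brute-force second-nearest-neighbor distance and a Monte Carlo estimate of µ_P over the unit square, with the point set, domain, and sample count as illustrative choices.

```python
import math
import random

def lfs(x, P):
    """Distance from x to its second nearest neighbor in P."""
    dists = sorted(math.dist(x, p) for p in P)
    return dists[1]

def feature_size_measure(P, d=2, samples=20000):
    """Monte Carlo estimate of mu_P = integral over [0,1]^d of dx / lfs_P(x)^d."""
    total = 0.0
    for _ in range(samples):
        x = [random.random() for _ in range(d)]
        total += 1.0 / lfs(x, P) ** d
    return total / samples  # the volume of [0,1]^d is 1

random.seed(0)
P = [(random.random(), random.random()) for _ in range(20)]
print(feature_size_measure(P))
```

The integral converges because lfs is the distance to the *second* nearest neighbor: even right on top of an input point, lfs stays bounded below by that point's nearest-neighbor distance.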
66. A canonical bad case for meshing is two
points in a big empty space.
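A rough numerical illustration of this bad case (my own sketch, not from the talk): for two points at distance eps inside the unit square, the feature size measure grows like log(1/eps), so the optimal mesh for this input needs a number of extra points that grows as the gap shrinks.

```python
import math
import random

def mu_two_points(eps, samples=40000, seed=1):
    """Monte Carlo estimate of mu over the unit square for two points
    at distance eps, centered in the square (d = 2)."""
    rng = random.Random(seed)
    a, b = (0.5 - eps / 2, 0.5), (0.5 + eps / 2, 0.5)
    total = 0.0
    for _ in range(samples):
        x = (rng.random(), rng.random())
        # With only two points, lfs(x) is the distance to the farther one.
        total += 1.0 / max(math.dist(x, a), math.dist(x, b)) ** 2
    return total / samples

for eps in (0.1, 0.01, 0.001):
    print(eps, mu_two_points(eps))
```

Each tenfold decrease in eps should add roughly a constant amount to the estimate, consistent with logarithmic growth.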
77. The feature size measure can be
bounded in terms of the pacing.
Order the points p_1, ..., p_n.
a = |p_i − NN(p_i)|   (distance from p_i to its nearest neighbor)
b = |p_i − 2NN(p_i)|  (distance from p_i to its second nearest neighbor)
The pacing of the ith point is φ_i = b/a.
Let φ be the geometric mean, so Σ_i log φ_i = n log φ.
φ is the pacing of the ordering.
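A minimal sketch of the definitions above (identifiers are my own; I read NN and 2NN as the nearest and second-nearest neighbors of p_i among the earlier points in the ordering, which is an assumption the slide leaves implicit):

```python
import math

def pacing(points):
    """Pacing phi_i = b/a for each point in the given insertion order,
    where a and b are the distances from p_i to its nearest and
    second-nearest neighbors among p_1 .. p_{i-1}."""
    phis = []
    for i in range(2, len(points)):  # starts at the third point (i = 3 on the slide)
        dists = sorted(math.dist(points[i], q) for q in points[:i])
        a, b = dists[0], dists[1]
        phis.append(b / a)
    return phis

def ordering_pacing(phis):
    """Geometric mean phi of the phi_i, so sum(log phi_i) = n log phi."""
    n = len(phis)
    return math.exp(sum(math.log(p) for p in phis) / n)
```

Note that φ_i ≥ 1 always, since b ≥ a by definition.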
89. The trick is to write the feature size
measure as a telescoping sum.
P_i = {p_1, ..., p_i}

µ_P = µ_{P_2} + Σ_{i=3}^{n} (µ_{P_i} − µ_{P_{i−1}})
(each term is the effect of adding the ith point)

µ_{P_i}(Ω) − µ_{P_{i−1}}(Ω) = Θ(1 + log φ_i)

Since Σ_{i=3}^{n} log φ_i = n log φ, the total is Θ(n + n log φ).
Previous bound: O(n + φ^d n).
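A rough numerical sketch of the telescoping sum (my own illustration, not code from the talk): estimating each µ_{P_i} with the *same* fixed set of Monte Carlo samples, the per-insertion increments are nonnegative (adding a point can only shrink lfs) and sum exactly to µ_P − µ_{P_2}.

```python
import math
import random

def mu_estimate(P, samples, d=2):
    """Monte Carlo estimate of mu_P over [0,1]^d on a fixed sample set."""
    total = 0.0
    for x in samples:
        second_nn = sorted(math.dist(x, p) for p in P)[1]
        total += 1.0 / second_nn ** d
    return total / len(samples)

rng = random.Random(2)
P = [(rng.random(), rng.random()) for _ in range(12)]
samples = [(rng.random(), rng.random()) for _ in range(5000)]

mus = [mu_estimate(P[:i], samples) for i in range(2, len(P) + 1)]
increments = [mus[i] - mus[i - 1] for i in range(1, len(mus))]
print(increments)  # the per-insertion effects, each Theta(1 + log phi_i)
```

The telescoping identity itself is exact; the Θ(1 + log φ_i) bound on each increment is the nontrivial part proved in the talk.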
98. Pacing analysis has already led to
new results.
The Scaffold Theorem (SODA 2009)
Given n points well-spaced on
a surface, the volume mesh
has size O(n).
Time-Optimal Point Meshing (SoCG 2011)
Build a mesh in O(n log n + m) time.
Algorithm explicitly computes the pacing for each insertion.
101. Some takeaway messages:
1 The amortized change in the number of vertices in a mesh
as a result of adding one new point is determined by the
pacing of that point.
2 Point sets that admit linear size meshes are exactly those
with constant pacing.
Thank you.
107. Mesh Generation
Decompose a domain
into simple elements.
Mesh Quality: Radius/Edge < const
Conforming to Input
Voronoi Diagram: OutRadius/InRadius < const
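A minimal sketch of the triangle quality measure named above (my own helper, not from the talk): the circumradius divided by the shortest edge. Well-shaped triangles have a small radius-edge ratio; a sliver has a large one.

```python
import math

def radius_edge_ratio(a, b, c):
    """Circumradius / shortest edge of triangle abc (2D points)."""
    ea, eb, ec = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    s = (ea + eb + ec) / 2
    area = math.sqrt(s * (s - ea) * (s - eb) * (s - ec))  # Heron's formula
    circumradius = ea * eb * ec / (4 * area)
    return circumradius / min(ea, eb, ec)
```

An equilateral triangle achieves the minimum possible ratio, 1/√3 ≈ 0.577; flattening one vertex toward the opposite edge sends the ratio to infinity.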
119. Optimal meshing adds the fewest points
to make all Voronoi cells fat.*
* Equivalent to radius-edge condition on Delaunay simplices.
123. Meshing Points
Input: P ⊂ R^d
Output: M ⊃ P with a “nice” Voronoi diagram
n = |P|, m = |M|
128. How to prove a meshing algorithm is optimal.
The Ruppert Feature Size: f_P(x) := distance from x to its 2nd nearest neighbor in P

For all v ∈ M, f_M(v) ≥ K·f_P(v)   (“no 2 points too close together”)
⟹  m = Θ( ∫_Ω dx / f_P(x)^d )   (“optimal size output”)
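A minimal sketch (my own, not the talk's code) of checking the "no two points too close together" condition on a concrete output: for every output vertex v ∈ M, compare f_M(v) against f_P(v); the smallest ratio over all vertices is the best constant K the output achieves.

```python
import math

def f(x, P):
    """Ruppert feature size: distance from x to its 2nd nearest neighbor in P.
    For x in P itself, the nearest neighbor is x at distance 0, so this
    returns the distance to the nearest other point."""
    return sorted(math.dist(x, p) for p in P)[1]

def spacing_constant(P, M):
    """min over v in M of f_M(v) / f_P(v): the best K the output M achieves."""
    return min(f(v, M) / f(v, P) for v in M)
```

A quality meshing algorithm guarantees this ratio stays above a fixed K independent of the input, which by the theorem forces m to be within a constant of optimal.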