New Bounds on the Size of Optimal Meshes

Don Sheehy

Geometrica, INRIA
Mesh Generation

1  Decompose a volume into simplices.

2  Simplices should be of good quality.

3  Output should conform to input.
Mesh Generation

Uses:
   PDEs via FEM
   Data Analysis

Good Codes:
   Triangle
   CGAL
   TetGen

Theoretical Guarantees:
   Sliver Removal
   Surface Reconstruction
Local Refinement Algorithms

Pros:
   Easy to implement
   Often Parallel

Cons:
   Termination?   Yes.
   Accumulations? No.
   How many points? This is what we’ll answer.
The size of an optimal mesh is given by the feature size measure.

   lfs_P(x) := distance to the second nearest neighbor of x in P.

   Optimal Mesh Size (number of vertices) = Θ( ∫_Ω dx / lfs_P(x)^d )

   (The Θ hides a simple exponential in d.)

   The Feature Size Measure:  µ_P(Ω) = ∫_Ω dx / lfs_P(x)^d

   When is µ_P(Ω) = O(n)?
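
Both quantities are directly computable for a concrete point set. Below is a minimal sketch (not from the talk; the helper names and the Monte Carlo approach are my own) that evaluates lfs_P with a nearest-neighbor query and estimates µ_P(Ω) over a box domain by random sampling.

    import numpy as np
    from scipy.spatial import cKDTree

    def lfs(points, x):
        # lfs_P(x): distance from x to its second nearest neighbor in P.
        # Querying k=2 returns distances to the two nearest points of P;
        # if x is itself a point of P, the first of these is 0.
        dists, _ = cKDTree(points).query(x, k=2)
        return dists[..., 1]

    def feature_size_measure(points, lo, hi, n_samples=200_000, rng=None):
        # Monte Carlo estimate of mu_P(Omega) = integral over Omega of
        # dx / lfs_P(x)^d, for the axis-aligned box Omega = [lo, hi]^d.
        rng = np.random.default_rng(rng)
        d = points.shape[1]
        xs = rng.uniform(lo, hi, size=(n_samples, d))
        vol = (hi - lo) ** d                  # volume of the box Omega
        return vol * np.mean(1.0 / lfs(points, xs) ** d)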
A canonical bad case for meshing is two points in a big empty space.
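
Using the feature_size_measure sketch above, one can watch this bad case develop numerically: as the pair of points moves closer inside a fixed box, the estimated µ_P(Ω) grows roughly like the log of the inverse separation (a rough illustration only; the constants depend on the sampling).

    # Two points at distance eps in the unit square: mu_P(Omega) grows
    # roughly logarithmically as eps shrinks.
    for eps in (0.1, 0.01, 0.001):
        P = np.array([[0.5 - eps / 2, 0.5], [0.5 + eps / 2, 0.5]])
        print(f"eps = {eps}: mu ~ {feature_size_measure(P, 0.0, 1.0, rng=0):.1f}")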
The feature size measure can be bounded in terms of the pacing.

Order the points. For the ith point p_i, let

   a = ‖p_i − NN(p_i)‖      (distance to its nearest neighbor)
   b = ‖p_i − 2NN(p_i)‖     (distance to its second nearest neighbor)

where the neighbors are taken among the points preceding p_i in the ordering.

   The pacing of the ith point is φ_i = b/a.

Let φ be the geometric mean of the φ_i, so Σ log φ_i = n log φ.

   φ is the pacing of the ordering.
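
A direct transcription of this definition (a sketch; the helper name and the convention that neighbors come from the preceding points are assumptions of mine):

    import numpy as np
    from scipy.spatial import cKDTree

    def pacing(ordered_points):
        # phi_i = b/a for i = 3..n, where a and b are the distances from
        # p_i to its nearest and second-nearest neighbors among
        # p_1, ..., p_{i-1}; returns the phi_i and their geometric mean.
        P = np.asarray(ordered_points)
        phis = []
        for i in range(2, len(P)):                # 0-indexed: p_3, ..., p_n
            a, b = cKDTree(P[:i]).query(P[i], k=2)[0]
            phis.append(b / a)
        phis = np.array(phis)
        return phis, np.exp(np.log(phis).mean())  # geometric mean phi

Rebuilding the tree at each step costs O(n^2 log n) overall, which is fine for a sketch; the SoCG 2011 algorithm mentioned later maintains this information far more efficiently.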
The trick is to write the feature size measure as a telescoping sum.

   P_i = {p_1, …, p_i}

   µ_P = µ_{P_2} + Σ_{i=3}^{n} ( µ_{P_i} − µ_{P_{i−1}} )

Each summand µ_{P_i} − µ_{P_{i−1}} is the effect of adding the ith point, and

   µ_{P_i}(Ω) − µ_{P_{i−1}}(Ω) = Θ(1 + log φ_i).

Since Σ_{i=3}^{n} log φ_i = n log φ, summing gives  µ_P(Ω) = Θ(n + n log φ).

   Previous bound: O(n + φ^d n).
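
A rough empirical check of the Θ(n + n log φ) bound, reusing the feature_size_measure and pacing sketches above (the hidden constants depend on d, so only the order of growth is comparable):

    rng = np.random.default_rng(0)
    P = rng.uniform(0.25, 0.75, size=(50, 2))   # random points inside [0,1]^2
    mu = feature_size_measure(P, 0.0, 1.0, rng=1)
    _, phi = pacing(P)                          # pacing of this ordering
    n = len(P)
    print(f"mu ~ {mu:.0f}   vs   n + n log(phi) = {n + n * np.log(phi):.0f}")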
Pacing analysis has already led to new results.

The Scaffold Theorem (SODA 2009)
   Given n points well-spaced on a surface, the volume mesh has size O(n).

Time-Optimal Point Meshing (SoCG 2011)
   Build a mesh in O(n log n + m) time.
   Algorithm explicitly computes the pacing for each insertion.
Some takeaway messages:

   1  The amortized change in the number of vertices in a mesh as a result of adding one new point is determined by the pacing of that point.

   2  Point sets that admit linear-size meshes are exactly those with constant pacing.


                      Thank you.
Mesh Generation

Decompose a domain into simple elements.

Mesh Quality:
   Radius/Edge < const
      [figure: example simplices marked ✗ ✓ ✗]
   Voronoi Diagram: OutRadius/InRadius < const
      [figure: example Voronoi cells marked ✗ ✓]

Conforming to Input
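
For a triangle, the Radius/Edge quantity is the circumradius divided by the shortest edge length. A small sketch (my own helper, not from the talk) makes the criterion concrete:

    import numpy as np

    def radius_edge(a, b, c):
        # Circumradius over shortest edge for the triangle abc.
        # Circumradius via R = (|ab| |bc| |ca|) / (4 * area).
        a, b, c = (np.asarray(p, float) for p in (a, b, c))
        ab, bc, ca = (np.linalg.norm(b - a), np.linalg.norm(c - b),
                      np.linalg.norm(a - c))
        cross = (b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0]
        area = 0.5 * abs(cross)
        R = ab * bc * ca / (4.0 * area)
        return R / min(ab, bc, ca)

    print(radius_edge([0, 0], [1, 0], [0.5, 0.8]))   # well-shaped: ratio near 0.6
    print(radius_edge([0, 0], [1, 0], [0.5, 0.02]))  # nearly degenerate: large ratio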
Optimal meshing adds the fewest points to make all Voronoi cells fat.*

   * Equivalent to radius-edge condition on Delaunay simplices.
Meshing Points

   Input: P ⊂ R^d
   Output: M ⊃ P with a “nice” Voronoi diagram
   n = |P|, m = |M|
How to prove a meshing algorithm is optimal.

   The Ruppert Feature Size: f_P(x) := distance to the 2nd nearest neighbor of x in P

   For all v ∈ M, f_M(v) ≥ K f_P(v)          m = Θ( ∫_Ω dx / f_P(x)^d )

   “No 2 points too close together”          “Optimal Size Output”

   [figure: a mesh vertex v with balls of radius f_P(v) and f_M(v)]
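
The left-hand condition is easy to audit for a candidate output. A sketch (the helper names and the constant K = 0.5 are illustrative assumptions, not the paper's): f_Q(x) is read off a k = 2 nearest-neighbor query, since for x ∈ Q the nearest point of Q is x itself at distance 0.

    import numpy as np
    from scipy.spatial import cKDTree

    def f(tree, x):
        # f_Q(x): distance to the second nearest point of Q (x itself
        # counts as the nearest when x is a member of Q).
        dists, _ = tree.query(x, k=2)
        return dists[..., 1]

    def well_spaced(P, M, K=0.5):
        # Check f_M(v) >= K * f_P(v) for every mesh vertex v in M.
        P_tree, M_tree = cKDTree(P), cKDTree(M)
        return bool(np.all(f(M_tree, M) >= K * f(P_tree, M)))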
